Parallel Algorithms Align with Neural Execution
Neural algorithmic reasoners are parallel processors. Teaching them
sequential algorithms contradicts this nature, rendering a significant share of
their computations redundant. Parallel algorithms, however, can exploit their
full computational power and therefore require fewer layers to be executed. This
drastically reduces training times, as we observe when comparing parallel
implementations of searching, sorting and finding strongly connected components
to their sequential counterparts on the CLRS framework. Additionally, parallel
versions achieve markedly superior predictive performance in most cases.
Comment: 8 pages, 5 figures, to appear at the KLR Workshop at ICML 202
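The depth argument above can be illustrated with a toy sketch (not from the paper): finding a maximum sequentially takes n-1 dependent steps, mirroring n-1 executed layers, while a parallel pairwise tournament needs only about log2(n) rounds.

```python
# Illustrative sketch: sequential vs parallel "depth" for computing a maximum.
# Fewer dependent rounds correspond to fewer reasoner layers to execute.

def sequential_max_steps(values):
    """One comparison per step: n - 1 dependent steps for n values."""
    best, steps = values[0], 0
    for v in values[1:]:
        best = max(best, v)
        steps += 1
    return best, steps

def parallel_max_rounds(values):
    """Pairwise tournament: roughly ceil(log2 n) rounds for n values."""
    vals, rounds = list(values), 0
    while len(vals) > 1:
        vals = [max(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds

data = list(range(16))
print(sequential_max_steps(data))  # (15, 15): 15 dependent steps
print(parallel_max_rounds(data))   # (15, 4):  4 parallel rounds
```

For 16 inputs the parallel schedule finishes in 4 rounds instead of 15 steps, which is the kind of depth saving the abstract attributes to parallel algorithm targets.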
Neural Algorithmic Reasoning for Combinatorial Optimisation
Solving NP-hard/complete combinatorial problems with neural networks is a
challenging research area that aims to surpass classical approximate
algorithms. The long-term objective is to outperform hand-designed heuristics
for NP-hard/complete problems by learning to generate superior solutions solely
from training data. The Travelling Salesman Problem (TSP) is a prominent
combinatorial optimisation problem often targeted by such approaches. However,
current neural-based methods for solving TSP often overlook the inherent
"algorithmic" nature of the problem. In contrast, heuristics designed for TSP
frequently leverage well-established algorithms, such as those for finding the
minimum spanning tree. In this paper, we propose leveraging recent advancements
in neural algorithmic reasoning to improve learning on the TSP.
Specifically, we suggest pre-training our neural model on relevant algorithms
before training it on TSP instances. Our results demonstrate that, using this
learning setup, we achieve superior performance compared to non-algorithmically
informed deep learning models.
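The proposed setup amounts to a two-stage schedule: first train a shared core on a relevant algorithm, then continue training the same core on TSP instances. A minimal skeleton of that schedule, with purely illustrative names (`Processor`, `pretrain`, `finetune` are not from the paper, and the stand-in model only counts updates rather than running message passing):

```python
# Hypothetical pretrain-then-finetune skeleton for algorithmically informed
# TSP learning. The Processor stands in for a shared message-passing core.

class Processor:
    """Stand-in for the shared network core; real code would hold weights."""
    def __init__(self):
        self.updates = 0

    def step(self, batch):
        # A real implementation would run message passing and a gradient step.
        self.updates += 1

def pretrain(processor, algorithm_batches):
    """Stage 1: train on a relevant algorithm, e.g. minimum spanning tree."""
    for batch in algorithm_batches:
        processor.step(batch)
    return processor

def finetune(processor, tsp_batches):
    """Stage 2: reuse the same core (same weights) on TSP instances."""
    for batch in tsp_batches:
        processor.step(batch)
    return processor

proc = finetune(pretrain(Processor(), ["mst_batch"] * 3), ["tsp_batch"] * 2)
print(proc.updates)  # 5: three pre-training updates, then two fine-tuning ones
```

The key design point is that both stages update the same object, so whatever algorithmic structure stage 1 instils is available when stage 2 begins.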
Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis
Graph neural networks (GNNs) are highly effective on a variety of graph-related tasks; however, they lack interpretability and transparency. Current explainability approaches are typically local and treat GNNs as black boxes. They do not look inside the model, inhibiting human trust in the model and its explanations. Motivated by the ability of neurons to detect high-level semantic concepts in vision models, we perform a novel analysis on the behaviour of individual GNN neurons to answer questions about GNN interpretability. We propose a novel approach for producing global explanations for GNNs using neuron-level concepts to enable practitioners to have a high-level view of the model. Specifically, (i) to the best of our knowledge, this is the first work which shows that GNN neurons act as concept detectors and have strong alignment with concepts formulated as logical compositions of node degree and neighbourhood properties; (ii) we quantitatively assess the importance of detected concepts, and identify a trade-off between training duration and neuron-level interpretability; (iii) we demonstrate that our global explainability approach has advantages over the current state-of-the-art -- we can disentangle the explanation into individual interpretable concepts backed by logical descriptions, which reduces the potential for bias and improves user-friendliness.
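The idea of a neuron aligning with a logical concept over node degree can be sketched as follows. This toy example (the graph, threshold, and concept are all illustrative, not the paper's method) binarises one neuron's per-node activations and scores its alignment with the concept "degree >= 2" via intersection-over-union:

```python
# Illustrative sketch: alignment between a single neuron's firing pattern and
# a logical concept over node degree, scored with intersection-over-union.

adjacency = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}      # toy graph
activations = {0: 0.1, 1: 0.9, 2: 0.8, 3: 0.7}                # one neuron

def concept_mask(adj, predicate):
    """Nodes satisfying a logical concept, e.g. a degree predicate."""
    return {v for v in adj if predicate(v)}

def neuron_mask(acts, threshold=0.5):
    """Nodes on which the neuron fires above a threshold."""
    return {v for v, a in acts.items() if a > threshold}

def iou(a, b):
    """Intersection-over-union between two node sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

degree_ge_2 = concept_mask(adjacency, lambda v: len(adjacency[v]) >= 2)
score = iou(neuron_mask(activations), degree_ge_2)
print(score)  # 1.0: the neuron fires exactly on the nodes with degree >= 2
```

A score near 1.0 would mark the neuron as a detector for that concept; composing predicates with `and`/`or` gives the logical compositions of degree and neighbourhood properties the abstract refers to.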